Evaluating Embedded Parsers
Authors
Abstract
We need to build parsers that are robust in the face of many types of parsing challenges, and to evaluate them for the accuracy and transparency of their results and for how well they scale up to practically sized problems. We describe a number of ways to measure accuracy, transparency, and scale-up in the context of evaluating the parser in the Casper system, a tutorial for training customer service representatives in complex problem solving.
Similar articles
Generalized Bottom Up Parsers With Reduced Stack Activity
We describe a generalized bottom up parser in which non-embedded recursive rules are handled directly by the underlying automaton, thus limiting stack activity to the activation of rules displaying embedded recursion. Our strategy is motivated by Aycock and Horspool’s approach, but uses a different automaton construction and leads to parsers that are correct for all context-free grammars, inclu...
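The abstract's central point, that stack activity is needed only for embedded (nested) recursion, can be illustrated with a toy shift-reduce recognizer. This is a hedged sketch for a hypothetical grammar (S → S '+' T | T; T → '(' S ')' | 'a'), not the paper's construction or Aycock and Horspool's automaton: left-recursive repetition keeps the stack bounded, while embedded nesting makes it grow.

```python
# Toy shift-reduce recognizer for:  S -> S '+' T | T ; T -> '(' S ')' | 'a'
# (illustrative sketch only). The explicit stack stays bounded on the
# left-recursive '+' chain but grows with parenthesis nesting depth.

def _reduce(stack):
    """Apply reductions greedily at the top of the stack."""
    while True:
        if stack and stack[-1] == 'a':
            stack[-1] = 'T'                              # T -> 'a'
        elif len(stack) >= 3 and stack[-3:] == ['(', 'S', ')']:
            del stack[-3:]                               # T -> '(' S ')'
            stack.append('T')
        elif len(stack) >= 3 and stack[-3:] == ['S', '+', 'T']:
            del stack[-3:]                               # S -> S '+' T
            stack.append('S')
        elif stack and stack[-1] == 'T' and (len(stack) == 1 or stack[-2] == '('):
            stack[-1] = 'S'                              # S -> T
        else:
            return

def recognize(tokens):
    """Return (accepted, max stack depth seen while shifting)."""
    stack, max_depth = [], 0
    for t in tokens:
        stack.append(t)                                  # shift
        max_depth = max(max_depth, len(stack))
        _reduce(stack)
    return stack == ['S'], max_depth
```

Running `recognize(list("a+a+a+a"))` reaches a maximum depth of 3 no matter how long the `+` chain gets, while `recognize(list("((a))"))` needs depth 4 and deeper nesting needs proportionally more.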
Full text
Implementing a Small Parsing Virtual Machine on Embedded Systems
PEGs are a formal grammar foundation for describing syntax, and generating parsers from them with plain recursive descent parsing is not hard. However, the large amount of C-stack consumption in recursive parsing is not acceptable, especially in resource-restricted embedded systems. As an alternative, we have attempted the machine virtualization approach to PEG-based parsing. MiniNez, our implemented v...
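The stack-consumption concern the abstract raises can be seen in a minimal recursive-descent sketch for a hypothetical toy grammar (not MiniNez's implementation): each nonterminal becomes a host-language function, so every level of nesting in the input consumes one call-stack frame.

```python
# Recursive-descent recognizer for a toy PEG-style grammar (sketch only):
#   expr <- term ('+' term)* ; term <- digit / '(' expr ')'
# Each parse function returns the new input position, or None on failure.

def parse_expr(s, i):
    """expr <- term ('+' term)*"""
    i = parse_term(s, i)
    while i is not None and i < len(s) and s[i] == '+':
        i = parse_term(s, i + 1)
    return i

def parse_term(s, i):
    """term <- digit / '(' expr ')'  (ordered choice)"""
    if i < len(s) and s[i].isdigit():
        return i + 1
    if i < len(s) and s[i] == '(':      # each '(' adds a call-stack frame
        j = parse_expr(s, i + 1)
        if j is not None and j < len(s) and s[j] == ')':
            return j + 1
    return None                          # both alternatives failed

def accepts(s):
    return parse_expr(s, 0) == len(s)
```

An input like `"((((1))))"` drives the recursion as deep as its nesting, which is exactly the C-stack growth that motivates compiling the grammar to a small virtual machine instead.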
Full text
Actions Speak Louder than Words: Evaluating Parsers in the Context of Natural Language Understanding Systems for Human-Robot Interaction
The standard ParsEval metrics alone are often not sufficient for evaluating parsers integrated into natural language understanding systems. We propose to augment intrinsic parser evaluations with extrinsic measures in the context of human-robot interaction, using a corpus from a human cooperative search task. We compare a constituent parser with a dependency parser on both intrinsic and extrinsic measures ...
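For readers unfamiliar with the intrinsic side, ParsEval's core idea is labeled-bracket precision and recall over constituent spans. The following is our own minimal illustration of that idea (not the evaluation code used in the paper, and omitting standard refinements such as punctuation handling).

```python
# Minimal labeled-bracket scoring in the spirit of ParsEval (sketch only).
# Trees are nested tuples like ('S', ('NP', 'dogs'), ('VP', 'bark')).

from collections import Counter

def brackets(tree):
    """Collect (label, start, end) spans over token positions as a multiset."""
    spans = []
    def walk(node, start):
        if isinstance(node, str):        # leaf token: consumes one position
            return start + 1
        label, *children = node
        i = start
        for child in children:
            i = walk(child, i)
        spans.append((label, start, i))
        return i
    walk(tree, 0)
    return Counter(spans)

def parseval(gold, pred):
    """Return (precision, recall, F1) over labeled brackets."""
    g, p = brackets(gold), brackets(pred)
    match = sum((g & p).values())         # multiset intersection
    prec = match / sum(p.values())
    rec = match / sum(g.values())
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

A predicted tree with one mislabeled constituent out of three, e.g. `('S', ('NP', 'dogs'), ('NP', 'bark'))` against the gold tree above, scores precision = recall = 2/3; the paper's point is that such scores say little by themselves about downstream task success.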
Full text
Cross-Framework Evaluation for Statistical Parsing
A serious bottleneck of comparative parser evaluation is the fact that different parsers subscribe to different formal frameworks and theoretical assumptions. Converting outputs from one framework to another is less than optimal as it easily introduces noise into the process. Here we present a principled protocol for evaluating parsing results across frameworks based on function trees, tree gen...
Full text
Evaluating Induced CCG Parsers on Grounded Semantic Parsing
We compare the effectiveness of four different syntactic CCG parsers on a semantic slot-filling task to explore how much syntactic supervision is required for downstream semantic analysis. This extrinsic, task-based evaluation also provides a unique window into the semantics captured (or missed) by unsupervised grammar induction systems.
Full text